Week 9: Expectation Maximization

Author

  • Sergey Levine
Abstract

Last week, we saw how we could represent clustering with a probabilistic model. In this model, called a Gaussian mixture model, we model each datapoint x_i as originating from some cluster, with a corresponding cluster label y_i distributed according to p(y), and the corresponding distribution for that cluster given by a multivariate Gaussian: p(x | y = k) = N(x; μ_k, Σ_k)
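The mixture density described above, p(x) = Σ_k p(y = k) N(x; μ_k, Σ_k), can be sketched directly in numpy. The cluster parameters below (two equal-weight 2-D clusters with identity covariance) are hypothetical values chosen for illustration, not from the notes:

```python
import numpy as np

def gaussian_pdf(x, mean, cov):
    """Multivariate Gaussian density N(x; mean, cov)."""
    d = len(mean)
    diff = x - mean
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff) / norm

def gmm_density(x, weights, means, covs):
    """Mixture density p(x) = sum_k p(y=k) * N(x; mu_k, Sigma_k)."""
    return sum(w * gaussian_pdf(x, m, c)
               for w, m, c in zip(weights, means, covs))

# Hypothetical two-cluster mixture in two dimensions
weights = [0.5, 0.5]                               # p(y = k)
means = [np.zeros(2), np.array([3.0, 3.0])]        # mu_k
covs = [np.eye(2), np.eye(2)]                      # Sigma_k

p = gmm_density(np.zeros(2), weights, means, covs)
```

Evaluated at the first cluster's mean, the density is dominated by that cluster's weighted Gaussian, since the second cluster is several standard deviations away.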


Similar resources

Recent Advances in Information Diffusion and Influence Maximization of Complex Social Networks

Contents: 1.1 Abstract; 1.2 Introduction; 1.3 Social Influence And Influence Maximization ...


Generalized mixture models, semi-supervised learning, and unknown class inference

In this paper, we discuss generalized mixture models and related semi-supervised learning methods, and show how they can be used to provide explicit methods for unknown class inference. After a brief description of standard mixture modeling and current model-based semi-supervised learning methods, we provide the generalization and discuss its computational implementation using three-stage expec...


The Basic Idea of EM

Contents: 4 The Expectation-Maximization algorithm; 4.1 Jointly-non-concave incomplete log-likelihood; 4.2 (Possibly) concave complete-data log-likelihood; 4.3 The general EM derivation; 4.4 The E- and M-steps; 4.5 The EM algorithm ...


Deterministic Quantum Annealing Expectation-Maximization Algorithm

Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM heavily depends on initial configurations and fails to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization appro...


A note on the maximization version of the multi-level facility location problem

We show that the maximization version of the multi-level facility location problem can be approximated within a factor of 0.5. The only previously known result is a factor of 0.47 for the two-level case, obtained recently by Bumb (Oper. Res. Lett. 29(4) (2001) 155).



Publication date: 2016